24 research outputs found

    Guided Depth Super-Resolution by Deep Anisotropic Diffusion

    Full text link
    Performing super-resolution of a depth image using guidance from an RGB image is a problem that concerns several fields, such as robotics, medical imaging, and remote sensing. While deep learning methods have achieved good results on this problem, recent work has highlighted the value of combining modern methods with more formal frameworks. In this work, we propose a novel approach that combines guided anisotropic diffusion with a deep convolutional network and advances the state of the art for guided depth super-resolution. The edge-transferring and edge-enhancing properties of the diffusion are boosted by the contextual reasoning capabilities of modern networks, and a strict adjustment step guarantees perfect adherence to the source image. We achieve unprecedented results on three commonly used benchmarks for guided depth super-resolution. The performance gain over other methods is largest at larger scales, such as x32 upsampling. Code for the proposed method will be made available to promote the reproducibility of our results.
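    The core mechanism can be illustrated with a minimal sketch of guided anisotropic diffusion, in which the diffusion conductivities are computed from the guide image so that depth smoothing stops at RGB edges. This is only an assumed illustration of the general technique (Perona-Malik-style conductivities, a numpy implementation, and illustrative parameter names such as lam and K); the paper combines such diffusion with a deep network and an adjustment step, which are not reproduced here.

    import numpy as np

    def guided_anisotropic_diffusion(depth, guide, iterations=100, lam=0.24, K=0.03):
        # depth: (H, W) upsampled depth map to refine; guide: (H, W) grayscale guide in [0, 1].
        d = depth.astype(np.float64).copy()
        g = guide.astype(np.float64)
        for _ in range(iterations):
            # forward differences of the current depth estimate (image borders wrap, ignored for brevity)
            dn = np.roll(d, -1, axis=0) - d
            ds = np.roll(d, 1, axis=0) - d
            de = np.roll(d, -1, axis=1) - d
            dw = np.roll(d, 1, axis=1) - d
            # conductivities from the guide image gradients: diffusion is blocked across RGB edges
            cn = np.exp(-((np.roll(g, -1, axis=0) - g) / K) ** 2)
            cs = np.exp(-((np.roll(g, 1, axis=0) - g) / K) ** 2)
            ce = np.exp(-((np.roll(g, -1, axis=1) - g) / K) ** 2)
            cw = np.exp(-((np.roll(g, 1, axis=1) - g) / K) ** 2)
            d += lam * (cn * dn + cs * ds + ce * de + cw * dw)
        return d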

    Automated avalanche mapping from SPOT 6/7 satellite imagery with deep learning: results, evaluation, potential and limitations

    Full text link
    Spatially dense and continuous information on avalanche occurrences is crucial for numerous safety-related applications such as avalanche warning, hazard zoning, hazard mitigation measures, forestry, risk management and numerical simulations. Today, this information is still collected in a non-systematic way by observers in the field. Current research has explored the application of remote sensing technology to fill this information gap by providing spatially continuous information on avalanche occurrences over large regions. Previous investigations have confirmed the high potential of avalanche mapping from remotely sensed imagery to complement existing databases. Currently, the bottleneck for fast data provision from optical data is the time-consuming manual mapping. In our study we deploy a slightly adapted DeepLabV3+, a state-of-the-art deep learning model, to automatically identify and map avalanches in SPOT 6/7 imagery from 24 January 2018 and 16 January 2019. We relied on 24 778 manually annotated avalanche polygons split into geographically disjoint regions for training, validation and testing. We also investigate generalization ability by testing our best model configuration on SPOT 6/7 data from 6 January 2018 and comparing it to avalanches we manually annotated for that purpose. To assess the quality of the model results, we report the probability of detection (POD), the positive predictive value (PPV) and the F1 score. In addition, we assessed the reproducibility of manually annotated avalanches in a small subset of our data. We achieved an average POD of 0.610, a PPV of 0.668 and an F1 score of 0.625 in our test areas, and found an F1 score in the same range for avalanche outlines annotated by different experts. Our model and approach are an important step towards fast and comprehensive documentation of avalanche periods from optical satellite imagery, complementing existing avalanche databases. This will have a large impact on safety-related applications, making mountain regions safer.
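    For reference, the three reported metrics are standard functions of confusion-matrix counts. A minimal sketch is given below; whether the counts are accumulated per avalanche object or per pixel is left open here, as the abstract does not specify it.

    def detection_scores(tp, fp, fn):
        # probability of detection (recall), positive predictive value (precision) and F1 score
        pod = tp / (tp + fn) if (tp + fn) else 0.0
        ppv = tp / (tp + fp) if (tp + fp) else 0.0
        f1 = 2 * pod * ppv / (pod + ppv) if (pod + ppv) else 0.0
        return pod, ppv, f1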

    FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation

    Full text link
    The ability to estimate epistemic uncertainty is often crucial when deploying machine learning in the real world, but modern methods often produce overconfident, uncalibrated uncertainty predictions. A common approach to quantifying epistemic uncertainty, usable across a wide class of prediction models, is to train a model ensemble. In a naive implementation, the ensemble approach has high computational cost and high memory demand. This is particularly challenging for modern deep learning, where even a single deep network is already demanding in terms of compute and memory, and it has given rise to a number of attempts to emulate the model ensemble without actually instantiating separate ensemble members. We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation (FiLM). That technique was originally developed for multi-task learning, with the aim of decoupling different tasks. We show that the idea can be extended to uncertainty quantification: by modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity, and consequently well-calibrated estimates of epistemic uncertainty, with low computational overhead in comparison. Empirically, FiLM-Ensemble outperforms other implicit ensemble methods, and it comes very close to the upper bound of an explicit ensemble of networks (sometimes even beating it), at a fraction of the memory cost.
    Comment: accepted at NeurIPS 2022
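    The modulation idea can be sketched in a few lines: a single shared backbone carries N sets of per-channel scale and shift parameters, one per implicit ensemble member, so each member adds only 2*C extra parameters per modulated layer. The PyTorch class below is an assumed illustration with invented names (FiLMEnsembleLayer, member_idx), not the authors' implementation; the paper applies such modulations throughout a standard backbone.

    import torch
    import torch.nn as nn

    class FiLMEnsembleLayer(nn.Module):
        # One (gamma, beta) pair of per-channel FiLM parameters per ensemble member.
        def __init__(self, num_members, num_channels):
            super().__init__()
            self.gamma = nn.Parameter(torch.ones(num_members, num_channels))
            self.beta = nn.Parameter(torch.zeros(num_members, num_channels))

        def forward(self, x, member_idx):
            # x: (B, C, H, W) activations; member_idx: (B,) long tensor routing each sample to a member
            g = self.gamma[member_idx].unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
            b = self.beta[member_idx].unsqueeze(-1).unsqueeze(-1)
            return g * x + b

    At inference one would replicate each input N times, route replica i through member i, and average the N softmax outputs to obtain the ensemble prediction and its uncertainty.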

    Convolutional neural networks for change analysis in remote sensing imagery with noisy annotations and domain shifts

    No full text
    The analysis of satellite and aerial Earth observation images allows us to obtain precise information over large areas. A multitemporal analysis of such images is necessary to understand the evolution of these areas. In this thesis, convolutional neural networks are used to detect and understand changes in remote sensing images from various sources, in supervised and weakly supervised settings. Siamese architectures are used to compare coregistered image pairs and to identify changed pixels. The proposed method is then extended into a multitask network architecture that detects changes and performs land cover mapping simultaneously, which permits a semantic understanding of the detected changes. Classification filtering and a novel guided anisotropic diffusion algorithm are then used to reduce the effect of biased label noise, a concern for automatically generated large-scale datasets. Weakly supervised learning is also explored: pixel-level change detection is performed using only image-level supervision, through class activation maps and a novel spatial attention layer. Finally, a domain adaptation method based on adversarial training is proposed, which projects images from different domains into a common latent space where a given task can be performed. This method is tested not only for domain adaptation in change detection, but also for image classification and semantic segmentation, demonstrating its versatility.
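    The Siamese comparison underlying the supervised setting can be sketched minimally: a shared encoder is applied to both acquisition dates and the feature difference is decoded into a per-pixel change score. The class name and layer sizes below are illustrative assumptions, not necessarily the architectures used in the thesis.

    import torch
    import torch.nn as nn

    class SiameseChangeDetector(nn.Module):
        # Shared-weight encoder applied to both dates; feature differences decoded to change logits.
        def __init__(self, in_channels=3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),
            )

        def forward(self, img_t1, img_t2):
            f1 = self.encoder(img_t1)  # identical weights for both dates
            f2 = self.encoder(img_t2)
            return self.head(torch.abs(f1 - f2))  # per-pixel change logits (apply sigmoid for probabilities)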

    Design, optimization and simulation of a low-power digital cell library

    No full text
    Undergraduate monograph, Universidade de Brasília, Faculdade de Tecnologia, 2015. This work compiles and organizes a set of techniques for the development, optimization and validation of standard cell libraries (SCLs) aimed at low-power circuits. It presents general guidelines for SCL development in different technologies and discusses the limitations of the presented techniques. The final part of the work consists of the design of a small SCL in X-FAB's XC06 technology to verify and illustrate the information presented earlier. The objective of this work is to serve as a concrete and complete foundation for future development of low-power SCLs.

    Robust Damage Estimation of Typhoon Goni on Coconut Crops with Sentinel-2 Imagery

    Full text link
    Typhoon Goni crossed several provinces in the Philippines where agriculture has high socioeconomic importance, including the top three provinces in terms of planted coconut trees. We used a computational model to infer coconut tree density from satellite images before and after the typhoon's passage, and in this way estimated the number of damaged trees. Our area of study around the typhoon's path covers 15.7 Mha and includes 47 of the 87 provinces in the Philippines. In validation areas our model predicts coconut tree density with a mean absolute error of 5.9 trees/ha. In Camarines Sur we estimated that 3.5 M of the 4.6 M existing coconut trees were damaged by the typhoon. Overall, we estimated that 14.1 M coconut trees were affected by the typhoon inside our area of study. Our validation images confirm that trees are rarely uprooted and that damage is largely due to reduced canopy cover of standing trees. On validation areas, our model detected affected coconut trees with 88.6% accuracy, 75% precision and 90% recall. Our method delivers spatially fine-grained change maps for coconut plantations in the area of study, distinguishing unchanged, damaged and new trees. Beyond immediate damage assessment, gradual changes in coconut density may serve as a proxy for future changes in yield.
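    In principle, the damage estimate follows from comparing the two predicted density rasters. The sketch below is an assumed illustration only: the function names, the 10 trees/ha change threshold and the per-pixel area argument are invented for the example and are not taken from the paper.

    import numpy as np

    def estimate_damaged_trees(density_before, density_after, pixel_area_ha):
        # Sum the per-pixel density loss (trees/ha); density increases are ignored when counting damage.
        loss = np.clip(density_before - density_after, 0.0, None)
        return float((loss * pixel_area_ha).sum())

    def change_map(density_before, density_after, threshold=10.0):
        # 0 = unchanged, 1 = damaged (density drop), 2 = new trees (density gain)
        delta = density_after - density_before
        labels = np.zeros(delta.shape, dtype=np.uint8)
        labels[delta < -threshold] = 1
        labels[delta > threshold] = 2
        return labels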